    Many inverse problems can be described by a PDE model with unknown parameters that need to be calibrated based on measurements related to its solution. This can be seen as a constrained minimization problem in which one minimizes the mismatch between the observed data and the model predictions, augmented with a regularization term, with the PDE acting as a constraint. Often, a suitable regularization parameter is determined by solving the problem for a whole range of parameter values -- e.g. using the L-curve -- which is computationally very expensive. In this paper we derive two methods that simultaneously solve the inverse problem and determine a suitable value for the regularization parameter. The first is a direct generalization of the Generalized Arnoldi-Tikhonov method for linear inverse problems. The second is a novel method based on similar ideas, but with a number of advantages for nonlinear problems.
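    The cost of the conventional parameter sweep that both methods avoid can be made concrete with a minimal sketch, assuming a plain linear Tikhonov problem (a dense direct solver stands in for the PDE-constrained solves, which are far more expensive in practice; all names are illustrative):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def l_curve_points(A, b, lambdas):
    """One full solve per candidate parameter: the expense the paper avoids."""
    pts = []
    for lam in lambdas:
        x = tikhonov_solve(A, b, lam)
        pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
    return pts  # plotted on log-log axes, the 'corner' suggests a good lam
```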

    Projected Newton Method for noise constrained Tikhonov regularization

    Tikhonov regularization is a popular approach to obtain a meaningful solution for ill-conditioned linear least squares problems. A relatively simple way of choosing a good regularization parameter is given by Morozov's discrepancy principle. However, most approaches require solving the Tikhonov problem for many different values of the regularization parameter, which is computationally demanding for large-scale problems. We propose a new and efficient algorithm which simultaneously solves the Tikhonov problem and finds the corresponding regularization parameter such that the discrepancy principle is satisfied. We achieve this by formulating the problem as a nonlinear system of equations and solving this system using a line search method. We obtain a good search direction by projecting the problem onto a low-dimensional Krylov subspace and computing the Newton direction for the projected problem. This projected Newton direction, which is significantly less expensive to compute than the true Newton direction, is then combined with a backtracking line search to obtain a globally convergent algorithm, which we refer to as the Projected Newton method. We prove convergence of the algorithm and illustrate its improved performance over current state-of-the-art solvers with some numerical experiments.
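    A small dense sketch may help fix ideas: the Tikhonov normal equations are coupled with the discrepancy-principle constraint into one nonlinear system, solved by a Newton iteration with backtracking on the residual norm. The Krylov projection that makes the paper's method efficient is omitted, so this illustrates only the outer iteration, not the algorithm itself (all names and the parameterization of lambda are ours):

```python
import numpy as np

def F(A, b, x, lam, eps):
    """Residual of the coupled system: Tikhonov optimality + discrepancy."""
    r = A @ x - b
    return np.concatenate([A.T @ r + lam * x, [0.5 * (r @ r - eps**2)]])

def newton_discrepancy(A, b, eps, lam=1.0, tol=1e-10, maxit=50):
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(maxit):
        Fv = F(A, b, x, lam, eps)
        if np.linalg.norm(Fv) < tol:
            break
        r = A @ x - b
        # Jacobian of F with respect to the stacked unknowns (x, lam)
        J = np.block([[A.T @ A + lam * np.eye(n), x[:, None]],
                      [(A.T @ r)[None, :], np.zeros((1, 1))]])
        d = np.linalg.solve(J, -Fv)
        t = 1.0  # backtracking line search on ||F||
        while (np.linalg.norm(F(A, b, x + t * d[:n], lam + t * d[n], eps))
               > (1.0 - 1e-4 * t) * np.linalg.norm(Fv)) and t > 1e-12:
            t *= 0.5
        x, lam = x + t * d[:n], lam + t * d[n]
    return x, lam
```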

    Exploring variable accuracy storage through lossy compression techniques in numerical linear algebra: a first application to flexible GMRES

    Large-scale applications running on HPC systems often require a substantial amount of memory and can have a large computational overhead. Lossy data compression techniques can reduce the size of the data and the associated communication cost, but the effect of the loss of accuracy on the numerical algorithm can be hard to predict. In this paper we examine the FGMRES algorithm, which requires the storage of a basis for the Krylov subspace and for the search space spanned by the solutions of the preconditioning systems. We show that the vectors spanning this search space can be compressed by analyzing the combination of FGMRES and compression in the context of inexact Krylov subspace methods. This allows us to derive a bound on the normwise relative compression error in each iteration. We use this bound to formulate a number of different practical compression strategies, which we validate and compare through numerical experiments.
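    A minimal sketch of the setting, with a stand-in "compressor" that merely rounds the search-space vectors z_j to single precision; a real implementation would drive an actual lossy compressor (e.g. SZ or ZFP) with the per-iteration error bound derived in the paper. A zero initial guess is assumed and all names are ours:

```python
import numpy as np

def fgmres_compressed(A, b, precond, m=50, tol=1e-8):
    n = b.size
    r = b.copy()                              # zero initial guess
    beta = np.linalg.norm(r)
    V = np.zeros((n, m + 1))                  # Krylov basis, kept in full precision
    Z32 = np.zeros((n, m), dtype=np.float32)  # lossily stored search space
    H = np.zeros((m + 1, m))
    V[:, 0] = r / beta
    for j in range(m):
        z = precond(V[:, j])
        Z32[:, j] = z                         # stand-in compression: round to float32
        w = A @ z
        for i in range(j + 1):                # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(j + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        if np.linalg.norm(H[:j + 2, :j + 1] @ y - e1) < tol * beta:
            break
    return Z32[:, :j + 1].astype(np.float64) @ y   # x = x0 + Z y
```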

    A complementary note on soft errors in the Conjugate Gradient method: the persistent error case

    This note is a follow-up study to [1], where we studied the resilience of the preconditioned conjugate gradient method (PCG). We complement the original work by performing a similar series of numerical experiments, but using what we call persistent instead of transient bit-flips.
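    To make the fault model concrete, here is a sketch of how such injections are typically set up (the injection sites and helper names are illustrative, not the note's actual harness). A persistent flip corrupts a stored operand, so every later use of it, e.g. each matrix-vector product inside PCG, sees the corrupted value; a transient flip perturbs a single use:

```python
import numpy as np

def flip_bit(x, k):
    """Flip bit k (0..63) of a float64 scalar."""
    u = np.float64(x).view(np.uint64)
    return (u ^ np.uint64(1 << k)).view(np.float64)

def inject_persistent(A, i, j, k):
    """Corrupt one stored matrix entry; every later matvec reuses it."""
    Af = A.copy()
    Af[i, j] = flip_bit(Af[i, j], k)
    return Af

def inject_transient(w, idx, k):
    """Corrupt one entry of a freshly computed vector; affects this use only."""
    w = w.copy()
    w[idx] = flip_bit(w[idx], k)
    return w
```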

    Backward-stable variants of GMRES in variable precision

    In the context where the representation of the data is decoupled from the arithmetic used to process them, we investigate the backward stability of two backward-stable implementations of the GMRES method, namely the so-called Modified Gram-Schmidt (MGS) and Householder variants. Considering that data may be compressed to alleviate the memory footprint, we are interested in the situation where the leading part of the rounding error is related to the data representation. When the data representation of vectors introduces componentwise perturbations, we show that the existing backward stability analyses of MGS-GMRES and Householder-GMRES still apply. We illustrate this backward stability property in a practical context where an agnostic lossy compressor is employed, enabling a reduction of the memory required to store the orthonormal Arnoldi basis or the Householder reflectors. Although the technical arguments of the theoretical backward stability proofs do not readily apply to the situation where only the normwise relative perturbations of the vector storage can be controlled, we show experimentally that backward stability is maintained; that is, the attainable normwise backward error is of the same order as the normwise perturbations induced by the data storage. We illustrate this with numerical experiments in two different practical contexts. The first corresponds to the use of an agnostic compressor where vector compression is controlled normwise. The second arises in the solution of tensor linear systems, where low-rank tensor approximations based on the Tensor-Train format are considered to tackle the curse of dimensionality.
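    The quantity monitored in such experiments is the normwise backward error of the computed solution. A minimal sketch of how it is evaluated, assuming the Frobenius norm as a convenient stand-in for whichever matrix norm the report actually uses:

```python
import numpy as np

def normwise_backward_error(A, b, x):
    """eta(x) = ||b - A x|| / (||A|| ||x|| + ||b||); backward stability means
    this stays at the level of the perturbations induced by the data storage."""
    return np.linalg.norm(b - A @ x) / (
        np.linalg.norm(A, 'fro') * np.linalg.norm(x) + np.linalg.norm(b))
```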

    On soft errors in the conjugate gradient method: sensitivity and robust numerical detection

    The conjugate gradient (CG) method is the most widely used iterative scheme for the solution of large sparse systems of linear equations when the matrix is symmetric positive definite. Although more than 60 years old, it is still a serious candidate for extreme-scale computations on large computing platforms. On the technological side, the continuous shrinking of transistor geometry and the increasing complexity of these devices dramatically affect their sensitivity to natural radiation and thus diminish their reliability. One of the most common effects produced by natural radiation is the single event upset, which consists of a bit-flip in a memory cell producing unexpected results at the application level. Consequently, future extreme-scale computing facilities will be more prone to errors of any kind, including bit-flips, during their calculations. These numerical and technological observations are the main motivations for this work, in which we first investigate through extensive numerical experiments the sensitivity of CG to bit-flips in its main computationally intensive kernels, namely the matrix-vector product and the preconditioner application. We further propose numerical criteria to detect the occurrence of such soft errors and assess their robustness through extensive numerical experiments.

    On soft errors in the Conjugate Gradient method: sensitivity and robust numerical detection (revised)

    The conjugate gradient (CG) method is the most widely used iterative scheme for the solution of large sparse systems of linear equations when the matrix is symmetric positive definite. Although more than sixty years old, it is still a serious candidate for extreme-scale computation on large computing platforms. On the technological side, the continuous shrinking of transistor geometry and the increasing complexity of these devices dramatically affect their sensitivity to natural radiation, and thus diminish their reliability. One of the most common effects produced by natural radiation is the single event upset, which consists of a bit-flip in a memory cell producing unexpected results at the application level. Consequently, future computing facilities at extreme scale might be more prone to errors of any kind, including bit-flips, during calculation. These numerical and technological observations are the main motivations for this work, in which we first investigate through extensive numerical experiments the sensitivity of CG to bit-flips in its main computationally intensive kernels, namely the matrix-vector product and the preconditioner application. We further propose numerical criteria to detect the occurrence of such soft errors; we assess their robustness through extensive numerical experiments.
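    One classical style of check in the spirit of such criteria (the paper's actual criteria and calibrated thresholds are not reproduced here): periodically recompute the true residual b - A x and compare it with the residual maintained by the CG recurrence; a gap well above the expected rounding level flags a likely soft error.

```python
import numpy as np

def residual_gap_alarm(A, b, x, r_recursive, tau=1e-6):
    """Return True if the recursively updated CG residual has drifted from the
    explicitly recomputed one by more than tau (illustrative threshold)."""
    gap = np.linalg.norm((b - A @ x) - r_recursive)
    return gap > tau * np.linalg.norm(b)
```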

    Investigating the validity of the DN4 in a consecutive population of patients with chronic pain

    Neuropathic pain is clinically described as pain caused by a lesion or disease of the somatosensory nervous system. The aim of this study was to assess the validity of the Dutch version of the DN4, in a cross-sectional multicentre design, as a screening tool for detecting a neuropathic pain component in a large consecutive population of patients with chronic pain that was not pre-stratified on the basis of the target outcome. Patients' pain was classified by two independent (pain) physicians as the gold standard. The analysis was initially performed on the outcomes of those patients (n = 228 out of 291) in whom both physicians agreed in their pain classification. Compared to the gold standard, the DN4 had a sensitivity of 75% and a specificity of 76%. The DN4 symptoms (seven interview items) alone resulted in a sensitivity of 70% and a specificity of 67%; for the DN4 signs (three examination items) these figures were 75% and 75%, respectively. In conclusion, because the DN4 identifies a neuropathic pain component in a consecutive population of patients with chronic pain only moderately well, a comprehensive (physical) examination by the physician remains obligatory.
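    For reference, the reported operating characteristics follow from the 2x2 table of screener outcome against the physicians' gold-standard classification; a minimal sketch (the counts passed in are placeholders, not the study's data):

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)
```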

    Avoiding Catch-22: Validating the PainDETECT in a population of patients with chronic pain

    BACKGROUND: Neuropathic pain is defined as pain caused by a lesion or disease of the somatosensory nervous system and is a major therapeutic challenge. Several screening tools have been developed to help physicians detect patients with neuropathic pain. These have typically been validated in populations pre-stratified for neuropathic pain, leading to a so-called Catch-22 situation: "a problematic situation for which the only solution is denied by a circumstance inherent in the problem or by a rule". The validity of screening tools needs to be proven in patients with pain who were not pre-stratified on the basis of the target outcome: neuropathic pain or non-neuropathic pain. This study aims to assess the validity of the Dutch PainDETECT (PainDETECT-Dlv) in a large population of patients with chronic pain. METHODS: A cross-sectional multicentre design was used to assess PainDETECT-Dlv validity. Included were patients with low back pain radiating into the leg(s), patients with neck-shoulder-arm pain, and patients with pain due to suspected peripheral nerve damage. Patients' pain was classified as having a neuropathic pain component (yes/no) by two experienced physicians (the "gold standard"). Physician opinion based on the Grading System served as a secondary comparison. RESULTS: In total, 291 patients were included. The primary analysis was done on the patients for whom both physicians agreed on the pain classification (n = 228). Compared to the physicians' classification, PainDETECT-Dlv had a sensitivity of 80% and a specificity of 55%; versus the Grading System it achieved 74% and 46%, respectively. CONCLUSION: Despite its internal consistency and test-retest reliability, the PainDETECT-Dlv is not an effective screening tool for a neuropathic pain component in a population of patients with chronic pain because of its moderate sensitivity and low specificity. Moreover, indiscriminate use of the PainDETECT-Dlv as a surrogate for clinical assessment should be avoided in daily clinical practice as well as in (clinical) research. Catch-22 situations in the validation of screening tools can be prevented by not pre-stratifying patients on the basis of the target outcome before inclusion in a validation study for screening instruments. TRIAL REGISTRATION: The protocol was registered prospectively in the Dutch National Trial Register: NTR 3030.